Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding the computation in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than those in previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work only deals with puzzles of small constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest known algorithms, our work provides evidence of, and perhaps a path to, finding families of strong USPs that imply matrix multiplication algorithms more efficient than those currently known.
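To make the object being searched for concrete, below is a brute-force verifier for one standard formulation of the strong USP condition: for every triple of row permutations that are not all equal, some row and column must be witnessed by exactly two of the three symbol conditions. This sketch is illustrative only; it enumerates all permutation triples and is therefore exponential in the puzzle size, which is precisely why the paper's SAT and IP reductions are needed for anything but tiny puzzles.

```python
from itertools import permutations

def is_strong_usp(puzzle):
    """Exhaustively check the strong USP condition for a small puzzle.

    `puzzle` is a list of equal-length rows over the alphabet {1, 2, 3}.
    """
    rows = range(len(puzzle))
    cols = range(len(puzzle[0]))
    for p1 in permutations(rows):
        for p2 in permutations(rows):
            for p3 in permutations(rows):
                if p1 == p2 == p3:
                    continue
                # The triple must be witnessed by some (row, column) where
                # exactly two of the three symbol conditions hold.
                witnessed = any(
                    (puzzle[p1[r]][c] == 1)
                    + (puzzle[p2[r]][c] == 2)
                    + (puzzle[p3[r]][c] == 3) == 2
                    for r in rows for c in cols
                )
                if not witnessed:
                    return False
    return True

# A single-row puzzle is vacuously a strong USP (only one permutation exists).
print(is_strong_usp([(1, 2, 3)]))  # True
```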
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vehicles (USVs), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, SeaDronesSee Object Detection v2, which extends the previous benchmark with more classes and footage. We provide statistical and qualitative analyses and assess trends in the best-performing methodologies across more than 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
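As a quick illustration of the open-access claim, the released checkpoints can be loaded through the Transformers library. The sketch below uses the smaller bigscience/bloom-560m variant, assumed here so the example runs on modest hardware; the full 176B model is published under bigscience/bloom and requires far more memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small released BLOOM variant (chosen for illustration; the full
# model id is "bigscience/bloom" and needs multi-GPU hardware).
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```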
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing and speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical given the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies in open-ended, task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of data size, model size, and data diversity, based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io.
Despite decades of research, existing navigation systems still face real-world challenges when deployed in the wild, such as cluttered home environments or human-occupied public spaces. To address this, we propose a new class of implicit control policies that combine the benefits of imitation learning with the robust handling of system constraints from Model Predictive Control (MPC). Our approach, called Performer-MPC, uses a learned cost function parameterized by visual context embeddings provided by Performers, a low-rank implicit-attention Transformer. We jointly train the cost function and construct the controller that relies on it, effectively solving the corresponding bi-level optimization problem end to end. We show that the resulting policy improves standard MPC performance by leveraging a few expert demonstrations in a variety of challenging real-world scenarios. Compared with a standard MPC policy, Performer-MPC achieves over 40% better goal reaching in cluttered environments and over 65% better on social metrics when navigating around humans.
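The core idea, a cost function learned from visual context that an MPC-style optimizer then minimizes, can be sketched in a few lines. The module and the sampling-based shooting optimizer below are simplified stand-ins (the paper uses Performer embeddings and solves the bi-level problem end to end); all names, dimensions, and the point-mass dynamics are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedCost(nn.Module):
    """Toy learned cost: maps a visual-context embedding plus a candidate
    trajectory to a scalar cost (the Performer encoder itself is omitted)."""
    def __init__(self, ctx_dim=64, horizon=10, state_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ctx_dim + horizon * state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, ctx, traj):
        # ctx: (B, ctx_dim); traj: (B, horizon, state_dim)
        flat = traj.reshape(traj.shape[0], -1)
        return self.net(torch.cat([ctx, flat], dim=-1)).squeeze(-1)

def sampling_mpc(cost_fn, ctx, state, horizon=10, n_samples=256):
    """Pick the first action of the lowest-cost sampled trajectory
    (a simple shooting method, not the paper's exact optimizer)."""
    actions = torch.randn(n_samples, horizon, 2) * 0.1   # candidate velocity sequences
    traj = state + torch.cumsum(actions, dim=1)          # integrate a unit-mass point model
    costs = cost_fn(ctx.expand(n_samples, -1), traj)
    best = torch.argmin(costs)
    return actions[best, 0]

cost_fn = LearnedCost()
ctx = torch.zeros(1, 64)       # placeholder visual-context embedding
state = torch.zeros(1, 1, 2)   # current 2-D position
print(sampling_mpc(cost_fn, ctx, state))
```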
Deep image classifiers have been found to learn biases from datasets. To mitigate bias, most previous methods require full supervision with labels of protected attributes (e.g., age, skin tone), which has two limitations: 1) it is infeasible when such labels are unavailable; 2) they cannot mitigate unknown biases, i.e., biases that humans do not preconceive. To resolve these problems, we propose Debiasing Alternate Networks (DebiAN), which comprises two networks: a Discoverer and a Classifier. By training in an alternating fashion, the Discoverer tries to find multiple unknown biases of the Classifier without any bias annotations, and the Classifier aims to unlearn the biases identified by the Discoverer. While previous works evaluate debiasing results with respect to a single bias, we create a Multi-Color MNIST dataset to better benchmark the mitigation of multiple biases in a multi-bias setting, which not only reveals problems in previous methods but also demonstrates the advantage of DebiAN in identifying and mitigating multiple biases simultaneously. We further conduct extensive experiments on real-world datasets, showing that the Discoverer in DebiAN can identify unknown biases that may be hard for humans to find. In terms of debiasing, DebiAN achieves strong bias-mitigation performance.
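A minimal sketch of the alternate-training loop follows, assuming a simplified bias objective: the discoverer softly splits each batch into two groups to maximize the gap in the classifier's confidence between them, while the classifier minimizes its task loss plus that gap. The actual DebiAN objectives differ in detail; all architectures and losses here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in models; the real networks are image classifiers.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
discoverer = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discoverer.parameters(), lr=1e-3)

def confidence_gap(x, y):
    probs = F.softmax(classifier(x), dim=1)
    conf = probs.gather(1, y.unsqueeze(1)).squeeze(1)  # confidence on the true class
    group = torch.sigmoid(discoverer(x)).squeeze(1)    # soft group assignment in [0, 1]
    mean_a = (conf * group).sum() / (group.sum() + 1e-6)
    mean_b = (conf * (1 - group)).sum() / ((1 - group).sum() + 1e-6)
    return (mean_a - mean_b).abs()

def train_step(x, y):
    # Discoverer step: find a split with a large confidence gap (an unknown-bias proxy).
    opt_d.zero_grad()
    (-confidence_gap(x, y)).backward()
    opt_d.step()
    # Classifier step: fit the task while shrinking the discovered gap.
    opt_c.zero_grad()
    (F.cross_entropy(classifier(x), y) + confidence_gap(x, y)).backward()
    opt_c.step()

train_step(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)))
```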
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful for robots aiming to act on high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean up a spill may yield a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform the task in a particular environment. We propose to provide real-world grounding through pretrained skills, which are used to constrain the model to propose natural-language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for executing complex, temporally extended instructions, while value functions associated with these skills provide the grounding needed to connect that knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, demonstrate the need for real-world grounding, and show that this approach can complete long-horizon, abstract, natural-language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/.
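The skill-selection rule this describes, weighting the language model's task-relevance score for each skill by a value function's estimate of its feasibility in the current state, can be sketched as follows. Both scoring functions are stubbed with toy numbers; the skill names and values are assumptions for illustration only.

```python
import math

def llm_skill_logprob(instruction, skill):
    # Placeholder: the real system scores the skill text under the LLM.
    toy_scores = {"pick up the sponge": -1.0, "go to the counter": -2.0, "pick up the apple": -5.0}
    return toy_scores.get(skill, -10.0)

def value_function(state, skill):
    # Placeholder: estimated probability the skill succeeds from this state.
    toy_values = {"pick up the sponge": 0.1, "go to the counter": 0.9, "pick up the apple": 0.2}
    return toy_values.get(skill, 0.0)

def select_skill(instruction, state, skills):
    # Combine task relevance (LLM) with feasibility (value function).
    def combined(skill):
        return math.exp(llm_skill_logprob(instruction, skill)) * value_function(state, skill)
    return max(skills, key=combined)

skills = ["pick up the sponge", "go to the counter", "pick up the apple"]
print(select_skill("clean up the spill", state=None, skills=skills))
```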
Medical AI has tremendous potential to advance healthcare by supporting evidence-based medical practice, personalizing patient treatment, reducing costs, and improving provider and patient experience. We argue that unlocking this potential requires a systematic way to measure the performance of medical AI models on large-scale heterogeneous data. To meet this need, we are building MedPerf, an open framework for benchmarking machine learning in the medical domain. MedPerf will enable federated evaluation, in which models are securely distributed to different evaluation facilities, empowering healthcare organizations to assess and verify the performance of AI models in an efficient, human-supervised process while prioritizing privacy. We describe the current challenges facing the healthcare and AI communities, the need for an open platform, MedPerf's design philosophy, its current implementation status, and our roadmap. We call on researchers and organizations to join us in creating the MedPerf open benchmarking platform.
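The federated-evaluation idea can be illustrated with a toy single-process sketch: the benchmark owner distributes a model to each facility, each facility evaluates on data that never leaves the site, and only aggregate metrics are returned. None of the names below correspond to the MedPerf API; this is purely a conceptual sketch.

```python
def evaluate_locally(model, local_dataset):
    # Runs at a facility; raw data never leaves the site.
    correct = sum(1 for x, y in local_dataset if model(x) == y)
    return {"n": len(local_dataset), "accuracy": correct / len(local_dataset)}

def federated_benchmark(model, facilities):
    # Only per-site aggregate metrics are collected and combined.
    reports = [evaluate_locally(model, data) for data in facilities]
    total = sum(r["n"] for r in reports)
    return sum(r["accuracy"] * r["n"] for r in reports) / total

model = lambda x: x > 0  # stand-in classifier
facilities = [[(1, True), (-2, False)], [(3, True), (-1, False), (0, False)]]
print(federated_benchmark(model, facilities))
```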
Recent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models, and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks. Transformers is an open-source library with the goal of opening up these advances to the wider machine learning community. The library consists of carefully engineered state-of-the-art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community. Transformers is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. The library is available at https://github.com/huggingface/transformers.
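A minimal usage sketch of the unified API: the pipeline below downloads a default pretrained sentiment model on first use and runs inference in a couple of lines. The exact default checkpoint is chosen by the library and may change between versions.

```python
from transformers import pipeline

# The task string selects a default pretrained checkpoint from the hub.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art NLP accessible."))
```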
Fine-tuning pre-trained models has been ubiquitously proven effective across a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it always yields an entirely new model for each task. Many recent works therefore propose to fine-tune only a small portion of the parameters while keeping most of the parameters shared across different tasks. These methods achieve surprisingly good performance and are shown to be more stable than their fully fine-tuned counterparts. However, such methods are still not well understood. Some natural questions arise: How does the parameter sparsity lead to promising performance? Why is the model more stable than the fully fine-tuned models? How should the tunable parameters be chosen? In this paper, we first categorize the existing methods into random approaches, rule-based approaches, and projection-based approaches based on how they choose which parameters to tune. Then, we show that all of these methods are in fact sparse fine-tuned models and give a novel theoretical analysis of them. We show that the sparsity effectively imposes a regularization on the original model by controlling the upper bound of the stability. Such stability leads to better generalization, which has been empirically observed in many recent works. Despite the effectiveness of sparsity grounded by our theory, how to choose the tunable parameters remains an open problem. To better choose the tunable parameters, we propose a novel Second-order Approximation Method (SAM), which approximates the original problem with an analytically solvable optimization function. The tunable parameters are determined by directly optimizing the approximation function. Experimental results show that our proposed SAM model outperforms many strong baselines and also verifies our theoretical analysis.
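To ground the idea of sparse fine-tuning, the sketch below freezes a "pre-trained" model and updates only a small mask of parameters, projecting every optimizer step back onto that mask. For simplicity the mask is chosen by a first-order proxy (largest gradient magnitude on one batch) rather than the paper's second-order approximation, so treat it as an illustration of the general recipe, not of SAM itself.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 2)  # stand-in for a pre-trained model
x, y = torch.randn(32, 128), torch.randint(0, 2, (32,))

# Score parameters with one backward pass and keep ~5% of entries tunable.
nn.functional.cross_entropy(model(x), y).backward()
masks = {}
for name, p in model.named_parameters():
    k = max(1, int(0.05 * p.numel()))
    threshold = p.grad.abs().flatten().topk(k).values.min()
    masks[name] = (p.grad.abs() >= threshold).float()
    p.grad = None

# Fine-tune: after each step, project the update back onto the chosen mask,
# so all other parameters stay at their pre-trained values.
init = {name: p.detach().clone() for name, p in model.named_parameters()}
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(10):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(init[name] + masks[name] * (p - init[name]))
```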